

Search for: All records

Creators/Authors contains: "Li, Mingchen"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Free, publicly-accessible full text available February 25, 2026
  2. Free, publicly-accessible full text available December 9, 2025
  3. Free, publicly-accessible full text available December 31, 2025
  4. Standard federated optimization methods apply successfully to stochastic problems with single-level structure. However, many contemporary ML problems – including adversarial robustness, hyperparameter tuning, and actor-critic learning – fall under nested bilevel programming, which subsumes minimax and compositional optimization. In this work, we propose FEDNEST: a federated alternating stochastic gradient method to address general nested problems. We establish provable convergence rates for FEDNEST in the presence of heterogeneous data and introduce variations for bilevel, minimax, and compositional optimization. FEDNEST introduces multiple innovations, including federated hypergradient computation and variance reduction to address inner-level heterogeneity. We complement our theory with experiments on hyperparameter and hyper-representation learning and on minimax optimization that demonstrate the benefits of our method in practice.
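The alternating inner/outer structure described in the abstract can be sketched as follows. This is a minimal illustration in the spirit of a federated alternating stochastic gradient method, not the paper's exact FEDNEST algorithm: the quadratic client objectives, step sizes, and hypergradient formula are all illustrative assumptions.

```python
import numpy as np

# Sketch: each client i holds an inner problem g_i(x, w) = 0.5 w^T Q_i w - (b_i + x)^T w
# with heterogeneous Q_i, and the shared outer objective is f(x, w) = 0.5||w||^2 + 0.5||x||^2.
# The server alternates local inner SGD steps with an averaged outer (hypergradient) step.
rng = np.random.default_rng(0)
n_clients, dim = 4, 3
A = [rng.standard_normal((dim, dim)) for _ in range(n_clients)]
Q = [a.T @ a / dim + np.eye(dim) for a in A]   # SPD inner Hessians (heterogeneous)
b = [rng.standard_normal(dim) for _ in range(n_clients)]

def inner_grad(i, w, x):
    # gradient of the inner objective in w
    return Q[i] @ w - (b[i] + x)

def outer_grad(i, w, x):
    # hypergradient via implicit differentiation: dw*/dx = Q_i^{-1}, so
    # grad_x f = x + Q_i^{-1} w  (for this toy quadratic problem)
    return x + np.linalg.solve(Q[i], w)

x = np.zeros(dim)
w = [np.zeros(dim) for _ in range(n_clients)]
for _ in range(200):
    # inner loop: a few local gradient steps per client on its own data
    for i in range(n_clients):
        for _ in range(5):
            w[i] = w[i] - 0.1 * inner_grad(i, w[i], x)
    # outer step: server averages client hypergradients (federated averaging)
    hg = np.mean([outer_grad(i, w[i], x) for i in range(n_clients)], axis=0)
    x = x - 0.05 * hg

# at convergence the averaged hypergradient should be near zero
final_hg = np.mean([outer_grad(i, w[i], x) for i in range(n_clients)], axis=0)
print(np.linalg.norm(final_hg))
```

The alternation is the key point: the outer variable x is only updated after the clients have partially re-solved their inner problems, which is what makes the hypergradient estimate meaningful under data heterogeneity.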
  5. Neural Architecture Search (NAS) is a popular method for automatically designing optimized architectures for high-performance deep learning. In this approach, it is common to use bilevel optimization, where one optimizes the model weights over the training data (lower-level problem) and various hyperparameters, such as the configuration of the architecture, over the validation data (upper-level problem). This paper explores the statistical aspects of such problems with train-validation splits. In practice, the lower-level problem is often overparameterized and can easily achieve zero loss. Thus, a priori it seems impossible to distinguish the right hyperparameters based on training loss alone, which motivates a better understanding of the role of the train-validation split. To this end, this work establishes the following results:
    • We show that refined properties of the validation loss, such as risk and hyper-gradients, are indicative of those of the true test loss. This reveals that the upper-level problem helps select the most generalizable model and prevent overfitting with a near-minimal validation sample size. Importantly, this is established for continuous search spaces, which are highly relevant for popular differentiable search schemes.
    • We establish generalization bounds for NAS problems with an emphasis on an activation search problem. When optimized with gradient descent, we show that the train-validation procedure returns the best (model, architecture) pair even if all architectures can perfectly fit the training data to achieve zero error.
    • Finally, we highlight rigorous connections between NAS, multiple kernel learning, and low-rank matrix learning. The latter leads to novel algorithmic insights, where the solution of the upper problem can be accurately learned via efficient spectral methods to achieve near-minimal risk.
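The central phenomenon in the abstract above – an overparameterized lower-level problem that interpolates the training data for every hyperparameter, so that only validation loss can discriminate – can be illustrated numerically. This is a toy sketch, not the paper's construction: the feature-rescaling "architecture" hyperparameter alpha, the min-norm solver, and the dimensions are illustrative assumptions.

```python
import numpy as np

# With dim > n_train, the min-norm interpolator drives training loss to zero
# for *every* hyperparameter alpha, so training loss cannot select alpha;
# validation loss can.
rng = np.random.default_rng(1)
n_train, n_val, dim = 40, 200, 50           # dim > n_train: overparameterized
w_true = rng.standard_normal(dim) / np.sqrt(dim)

def sample(n):
    X = rng.standard_normal((n, dim))
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    return X, y

Xtr, ytr = sample(n_train)
Xva, yva = sample(n_val)

def fit_min_norm(X, y, alpha):
    # hypothetical "architecture" knob: alpha rescales the first half of the
    # features; the min-norm interpolator fits the training set exactly.
    S = np.concatenate([alpha * np.ones(dim // 2), np.ones(dim - dim // 2)])
    Xs = X * S
    w = Xs.T @ np.linalg.solve(Xs @ Xs.T, y)   # min-norm solution (n < dim)
    return w, S

train_losses, val_losses = [], []
for alpha in [0.01, 0.1, 1.0, 10.0]:
    w, S = fit_min_norm(Xtr, ytr, alpha)
    train_losses.append(np.mean(((Xtr * S) @ w - ytr) ** 2))
    val_losses.append(np.mean(((Xva * S) @ w - yva) ** 2))

print(train_losses)   # all near zero: training loss cannot distinguish alpha
print(val_losses)     # varies with alpha: validation loss ranks the choices
```

Here every alpha achieves (numerically) zero training error, yet the validation losses differ substantially, mirroring the paper's point that validation risk and its gradients carry the signal the training loss has erased.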
  6. In this article, we investigate the problem of parameter identification for spatio-temporally varying processes described by a general nonlinear partial differential equation, and we validate the feasibility and robustness of the proposed algorithm using a group of coordinated mobile robots equipped with sensors in a realistic diffusion field. Building on the online parameter identification method developed in our previous work using multiple mobile robots, we first develop a parameterized model that represents the nonlinear spatially distributed field and then develop a parameter identification scheme consisting of a cooperative Kalman filter and a recursive least squares method. In the experiments, we focus on a diffusion field and consider realistic scenarios in which the field contains obstacles and hazard zones that the robots should avoid. The identified parameters, together with the located source, could assist in the reconstruction and monitoring of the field. To validate the proposed methods, we generate a controllable carbon dioxide (CO2) field in our laboratory and build a static CO2 sensor network to measure and calibrate it. With the realistic diffusion field reconstructed from the sensor network's measurements, a multi-robot system performs parameter identification in the field. The results of simulations and experiments show the satisfactory performance and robustness of the proposed algorithms.
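The recursive least squares component of the identification scheme described above can be sketched as follows. This is a generic RLS illustration under assumed names: the basis functions in `regressor`, the "true" coefficients, and the measurement model are hypothetical stand-ins for the paper's parameterized field model, and the cooperative Kalman filter stage is omitted.

```python
import numpy as np

# Sketch: robots stream regressor/measurement pairs from a linearly
# parameterized field model z = phi(p)^T theta + noise, and standard RLS
# (forgetting factor 1) updates the parameter estimate theta online.
rng = np.random.default_rng(2)
theta_true = np.array([2.0, -1.0, 0.5])    # illustrative field coefficients

def regressor(p):
    # hypothetical spatial basis evaluated at robot position p = (x, y)
    x, y = p
    return np.array([x * x + y * y, x, y])

theta = np.zeros(3)
P = 1e3 * np.eye(3)                        # large initial covariance gain
for _ in range(500):
    p = rng.uniform(-1, 1, size=2)         # robot measurement location
    phi = regressor(p)
    z = phi @ theta_true + 0.01 * rng.standard_normal()
    # standard RLS update
    K = P @ phi / (1.0 + phi @ P @ phi)    # gain vector
    theta = theta + K * (z - phi @ theta)  # correct estimate with innovation
    P = P - np.outer(K, phi @ P)           # shrink covariance
print(theta)  # approaches theta_true as measurements accumulate
```

Because each update costs only O(d^2) for d parameters, this kind of recursive estimator suits online operation on mobile robots, where measurements arrive one at a time along the robots' trajectories.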